
    Belief and Truth in Hypothesised Behaviours

    There is a long history in game theory on the topic of Bayesian or "rational" learning, in which each player maintains beliefs over a set of alternative behaviours, or types, for the other players. This idea has gained increasing interest in the artificial intelligence (AI) community, where it is used as a method to control a single agent in a system composed of multiple agents with unknown behaviours. The idea is to hypothesise a set of types, each specifying a possible behaviour for the other agents, and to plan our own actions with respect to those types which we believe are most likely, given the observed actions of the agents. The game theory literature studies this idea primarily in the context of equilibrium attainment. In contrast, many AI applications focus on task completion and payoff maximisation. With this perspective in mind, we identify and address a spectrum of questions pertaining to belief and truth in hypothesised types. We formulate three basic ways to incorporate evidence into posterior beliefs and show when the resulting beliefs are correct, and when they may fail to be correct. Moreover, we demonstrate that prior beliefs can have a significant impact on our ability to maximise payoffs in the long term, and that they can be computed automatically with consistent performance effects. Furthermore, we analyse the conditions under which we are able to complete our task optimally, despite inaccuracies in the hypothesised types. Finally, we show how the correctness of hypothesised types can be ascertained during the interaction via an automated statistical analysis.
    Comment: 44 pages; final manuscript published in Artificial Intelligence (AIJ).
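    As an illustration only (not the paper's implementation), the sketch below shows the basic Bayesian belief update the abstract refers to: a posterior over a set of hypothesised types, updated from the likelihood each type assigns to an observed action. The type names, action labels, and probabilities are invented for the example.

def update_beliefs(prior, likelihoods):
    """Posterior beliefs over hypothesised types:
    P(type | action) is proportional to P(action | type) * P(type)."""
    unnorm = {t: prior[t] * likelihoods[t] for t in prior}
    z = sum(unnorm.values())
    if z == 0:
        # No hypothesised type explains the observation; keep the prior.
        return dict(prior)
    return {t: p / z for t, p in unnorm.items()}

# Illustrative example: two hypothesised types for the other agent.
beliefs = {"cooperative": 0.5, "adversarial": 0.5}   # prior beliefs
action_model = {                                      # assumed behaviour of each type
    "cooperative": {"share": 0.9, "defect": 0.1},
    "adversarial": {"share": 0.2, "defect": 0.8},
}
for action in ["share", "share", "defect"]:           # observed actions of the other agent
    beliefs = update_beliefs(beliefs, {t: action_model[t][action] for t in beliefs})
print(beliefs)  # belief mass shifts towards the type that best explains the actions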

    Predictive Model for Human-Unmanned Vehicle Systems

    Advances in automation are making it possible for a single operator to control multiple unmanned vehicles. However, the complex nature of these teams presents a difficult and exciting challenge for designers of human–unmanned vehicle systems. To build such systems effectively, models must be developed that describe the behavior of the human–unmanned vehicle team and that predict how alterations in team composition and system design will affect the system's overall performance. In this paper, we present a method for modeling human–unmanned vehicle systems consisting of a single operator and multiple independent unmanned vehicles. Via a case study, we demonstrate that the resulting models provide an accurate description of observed human–unmanned vehicle systems. Additionally, we demonstrate that the models can be used to predict how changes in the human–unmanned vehicle interface and the unmanned vehicles' autonomy alter the system's performance.
    Lincoln Laboratory
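    The abstract does not describe the model's internals, so purely as an illustration of the kind of system being modeled, here is a minimal discrete-event sketch of a single operator servicing several independent vehicles, where "autonomy" is abstracted as how long a vehicle can be neglected between interactions. The function name and all parameters are assumptions, not the paper's model.

import random

def simulate(num_vehicles, neglect_time, interact_time, num_events=1000, seed=0):
    """Average time a vehicle waits for the operator, per service event."""
    random.seed(seed)
    next_need = [random.uniform(0, neglect_time) for _ in range(num_vehicles)]
    operator_free_at = 0.0
    waiting = 0.0
    for _ in range(num_events):
        i = min(range(num_vehicles), key=lambda v: next_need[v])  # earliest request
        start = max(next_need[i], operator_free_at)               # operator may be busy
        waiting += start - next_need[i]                           # vehicle sits idle
        operator_free_at = start + interact_time                  # operator services it
        next_need[i] = operator_free_at + neglect_time            # vehicle runs autonomously
    return waiting / num_events

# Illustrative example: greater vehicle autonomy (a longer neglect time)
# reduces how long each of four vehicles waits for the single operator.
for neglect in (20.0, 40.0, 80.0):
    print(neglect, simulate(num_vehicles=4, neglect_time=neglect, interact_time=10.0))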
